
    Analysis of Linsker's simulations of Hebbian rules

    Linsker has reported the development of center-surround receptive fields and oriented receptive fields in simulations of a Hebb-type equation in a linear network. The dynamics of the learning rule are analyzed in terms of the eigenvectors of the covariance matrix of cell activities. Analytic and computational results for Linsker's covariance matrices, and some general theorems, lead to an explanation of the emergence of center-surround and certain oriented structures. We estimate criteria for the parameter regime in which center-surround structures emerge.

    The Role of Constraints in Hebbian Learning

    Models of unsupervised, correlation-based (Hebbian) synaptic plasticity are typically unstable: either all synapses grow until each reaches the maximum allowed strength, or all synapses decay to zero strength. A common method of avoiding these outcomes is to use a constraint that conserves or limits the total synaptic strength over a cell. We study the dynamic effects of such constraints. Two methods of enforcing a constraint are distinguished, multiplicative and subtractive. For otherwise linear learning rules, multiplicative enforcement of a constraint results in dynamics that converge to the principal eigenvector of the operator determining unconstrained synaptic development. Subtractive enforcement, in contrast, typically leads to a final state in which almost all synaptic strengths reach either the maximum or minimum allowed value. This final state is often dominated by weight configurations other than the principal eigenvector of the unconstrained operator. Multiplicative enforcement yields a “graded” receptive field in which most mutually correlated inputs are represented, whereas subtractive enforcement yields a receptive field that is “sharpened” to a subset of maximally correlated inputs. If two equivalent input populations (e.g., two eyes) innervate a common target, multiplicative enforcement prevents their segregation (ocular dominance segregation) when the two populations are weakly correlated, whereas subtractive enforcement allows segregation under these circumstances. These results may be used to understand constraints both over output cells and over input cells. A variety of rules that can implement constrained dynamics are discussed.
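
The contrast between the two enforcement methods can be illustrated with a minimal sketch: a linear Hebbian rule on a toy covariance matrix, integrated by Euler steps. The matrix, learning rate, and bounds below are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy covariance matrix over 8 inputs: nearby inputs are more correlated.
n = 8
C = np.array([[np.exp(-abs(i - j) / 2.0) for j in range(n)] for i in range(n)])

def evolve(enforce, steps=3000, lr=0.005, wmax=1.0):
    """Linear Hebbian growth dw = lr * C @ w under a total-strength constraint."""
    w = rng.uniform(0.1, 0.2, n)
    total = w.sum()
    for _ in range(steps):
        dw = lr * (C @ w)
        if enforce == "multiplicative":
            w = w + dw
            w *= total / w.sum()       # rescale: conserves sum(w) multiplicatively
        else:                          # subtractive
            w = w + dw - dw.mean()     # subtract a uniform amount from every synapse
            w = np.clip(w, 0.0, wmax)  # hard bounds stop runaway weights
    return w

w_mult = evolve("multiplicative")
w_sub = evolve("subtractive")

# Principal eigenvector of C (eigh returns eigenvalues in ascending order).
v1 = np.abs(np.linalg.eigh(C)[1][:, -1])
cos_mult = w_mult @ v1 / (np.linalg.norm(w_mult) * np.linalg.norm(v1))
# cos_mult comes out close to 1: a "graded" field along the principal
# eigenvector, while w_sub ends "sharpened", with weights pinned at 0 or wmax.
```

The multiplicative rescaling makes each iteration a step of power iteration on (I + lr·C), hence convergence to the principal eigenvector; the subtractive rule drives every weight to one of its bounds.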

    Diffraction-limited CCD imaging with faint reference stars

    By selecting short-exposure images taken using a CCD with negligible readout noise, we obtained essentially diffraction-limited 810 nm images of faint objects using nearby reference stars brighter than I=16 at a 2.56 m telescope. The FWHM of the isoplanatic patch for the technique is found to be 50 arcseconds, providing ~20% sky coverage around suitable reference stars. Comment: 4-page letter accepted for publication in Astronomy and Astrophysics.
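
The underlying frame-selection ("lucky imaging") idea can be sketched as follows, using synthetic Gaussian star images and a peak-brightness sharpness criterion. All sizes, fractions, and noise levels here are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_frame(blur):
    """Synthetic 32x32 short exposure: one star with random image motion."""
    y, x = np.mgrid[:32, :32]
    cy, cx = rng.uniform(12.0, 20.0, 2)                # tip/tilt wander
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * blur ** 2))
    psf /= 2 * np.pi * blur ** 2                       # conserve total star flux
    return psf + rng.normal(0.0, 0.001, (32, 32))      # faint noise floor

# Seeing varies frame to frame; small blur = a "lucky", near-diffraction-limited frame.
frames = [make_frame(blur=rng.uniform(1.0, 4.0)) for _ in range(200)]

# Rank frames by the peak counts of the reference star (a Strehl-like proxy)
# and keep the sharpest 10%.
peaks = [f.max() for f in frames]
best = np.argsort(peaks)[-20:]

# Shift-and-add: re-centre each selected frame on its brightest pixel.
stack = np.zeros((32, 32))
for i in best:
    py, px = np.unravel_index(frames[i].argmax(), frames[i].shape)
    stack += np.roll(np.roll(frames[i], 16 - py, axis=0), 16 - px, axis=1)
stack /= len(best)
```

Because flux is conserved, sharper frames have brighter peaks, so peak counts of the reference star rank the frames; this is why the technique needs a reference star within the isoplanatic patch.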

    Error correcting code using tree-like multilayer perceptron

    An error correcting code using a tree-like multilayer perceptron is proposed. An original message s^0 is encoded into a codeword y_0 using a tree-like committee machine (committee tree) or a tree-like parity machine (parity tree). Based on these architectures, several schemes featuring monotonic or non-monotonic units are introduced. The codeword y_0 is then transmitted via a Binary Asymmetric Channel (BAC), where it is corrupted by noise. The analytical performance of these schemes is investigated using the replica method of statistical mechanics. Under some specific conditions, some of the proposed schemes are shown to saturate the Shannon bound in the infinite-codeword-length limit. The influence of the monotonicity of the units on the performance is also discussed. Comment: 23 pages, 3 figures, content has been extended and revised.
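
A tree parity machine of the kind used as the encoder can be sketched as follows (a minimal illustration with made-up sizes; the paper's actual encoding schemes and unit transfer functions are more varied):

```python
import numpy as np

rng = np.random.default_rng(2)

def parity_tree_encode(s, W):
    """One codeword bit from a tree parity machine.

    W has shape (K, N // K): hidden unit k sees only its own disjoint block
    of the message (the tree structure), outputs the sign of its local field,
    and the codeword bit is the product of the K hidden-unit outputs.
    """
    K, n = W.shape
    sigma = np.sign(np.sum(W * s.reshape(K, n), axis=1))  # hidden-unit outputs
    return int(np.prod(sigma))

N, K = 15, 3                 # odd block size, so a local field is never zero
s0 = rng.choice([-1, 1], N)  # original +/-1 message

# One fixed random weight matrix per codeword bit; M matrices give an
# M-bit codeword (rate N/M).
M = 24
weights = [rng.choice([-1, 1], (K, N // K)) for _ in range(M)]
y0 = np.array([parity_tree_encode(s0, W) for W in weights])
```

A committee tree differs only in the output unit: it takes the majority sign of the hidden-unit outputs rather than their product.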

    An inverse problem of reconstructing the electrical and geometrical parameters characterising airframe structures and connector interfaces

    This article is concerned with the detection of environmental ageing in adhesively bonded structures used in the aircraft industry. Using a transmission-line approach, a forward model for the reflection coefficients is constructed and is shown to have an analytic solution in the case of constant permeability and permittivity. The inverse problem is analysed to determine necessary conditions for a unique recovery. The main thrust of this article involves modelling the connector; experimental rigs are then built for the case of the air-filled line to enable the connector parameters to be identified and the inverse solver to be tested. Some results are also displayed for the dielectric-filled line.

    Comprehensive cosmographic analysis by Markov Chain Method

    We study the possibility of extracting model-independent information about the dynamics of the universe by using Cosmography. We intend to explore it systematically, to learn about its limitations and its real possibilities. Here we stick to the series-expansion approach on which Cosmography is based. We apply it to different data sets: Supernovae Type Ia (SNeIa), Hubble parameter measurements extracted from differential galaxy ages, Gamma Ray Bursts (GRBs) and Baryon Acoustic Oscillations (BAO) data. We go beyond past results in the literature by extending the series expansion up to the fourth order in the scale factor, which implies the analysis of the deceleration, q_{0}, the jerk, j_{0}, and the snap, s_{0}. We use the Markov Chain Monte Carlo method (MCMC) to analyze the data statistically. We also relate direct results from Cosmography to dynamical dark energy (DE) models parameterized by the Chevallier-Polarski-Linder (CPL) model, extracting clues about the matter content and the dark energy parameters. The main results are: a) even though it relies on an approximate mathematical assumption, namely the series expansion of the scale factor in time, Cosmography can be extremely useful in assessing dynamical properties of the Universe; b) the deceleration parameter clearly confirms the present acceleration phase; c) the MCMC method can give narrower constraints in parameter estimation, in particular for the higher-order cosmographic parameters (the jerk and the snap), with respect to the literature; d) both the estimated jerk and the DE parameters reflect the possibility of a deviation from the LCDM cosmological model. Comment: 24 pages, 7 figures.
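
For reference, the fourth-order expansion in question is the standard cosmographic Taylor series of the scale factor; the conventional definitions below are reconstructed from the standard literature rather than quoted from the paper.

```latex
a(t) = a_0 \left[ 1 + H_0 \,\Delta t - \frac{q_0}{2} H_0^2 \,\Delta t^2
     + \frac{j_0}{6} H_0^3 \,\Delta t^3 + \frac{s_0}{24} H_0^4 \,\Delta t^4
     + \mathcal{O}\!\left(\Delta t^5\right) \right], \qquad \Delta t = t - t_0,
```

where the Hubble, deceleration, jerk, and snap parameters are

```latex
H = \frac{\dot a}{a}, \qquad
q = -\frac{1}{H^2} \frac{\ddot a}{a}, \qquad
j = \frac{1}{H^3} \frac{\dddot a}{a}, \qquad
s = \frac{1}{H^4} \frac{\ddddot a}{a},
```

all evaluated at the present epoch t_0 to give q_0, j_0, and s_0.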

    Looking Good With Flickr Faves: Gaussian Processes for Finding Difference Makers in Personality Impressions

    Flickr allows its users to generate galleries of "faves", i.e., pictures that they have tagged as favourites. According to recent studies, the faves are predictive of the personality traits that people attribute to Flickr users. This article investigates the phenomenon and shows that faves allow one to predict whether a Flickr user is perceived to be above the median or not with respect to each of the Big-Five traits (accuracy up to 79% depending on the trait). The classifier - based on Gaussian Processes with a new kernel designed for this work - allows one to identify the visual characteristics of faves that best account for the prediction outcome.

    Photon counting strategies with low light level CCDs

    Low light level charge coupled devices (L3CCDs) have recently been developed, incorporating on-chip gain. They may be operated to give an effective readout noise much less than one electron, allowing the detection of individual photons. However, the gain mechanism is stochastic and so introduces significant extra noise into the system. In this paper we examine how best to process the output signal from an L3CCD so as to minimize the contribution of stochastic noise while still maintaining photometric accuracy. We achieve this by optimising a transfer function which translates the digitised output signal levels from the L3CCD into a value approximating the photon input as closely as possible, by applying thresholding techniques. We identify several thresholding strategies and quantify their impact on photon counting accuracy and effective signal-to-noise. We find that it is possible to eliminate the noise introduced by the gain process at the lowest light levels. Reduced improvements are achieved as the light level increases up to about twenty photons per pixel, above which there is negligible improvement. Operating L3CCDs at very high speeds will keep the photon flux low, giving the best improvements in signal-to-noise ratio. Comment: 7 pages, accepted by MNRAS.
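
The simplest thresholding strategy can be sketched with a toy L3CCD model: stochastic gain approximated by an exponential distribution, Gaussian read noise, and a single detection threshold. The gain, noise, flux, and threshold values below are illustrative assumptions, not the paper's numbers.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative L3CCD model parameters (assumptions, not the paper's values):
g, read_noise = 1000.0, 10.0   # mean avalanche gain (e-/photon), read noise (e-)
n_pix, flux = 100_000, 0.1     # pixels; mean photons/pixel (photon-counting regime)

photons = rng.poisson(flux, n_pix)
# For large gain, the multiplication-register output for k input electrons is
# well approximated by a sum of k exponential variables of mean g.
signal = np.array([rng.exponential(g, k).sum() for k in photons])
output = signal + rng.normal(0.0, read_noise, n_pix)

# Single-threshold strategy: call a pixel "1 photon" when it exceeds ~5 sigma
# of the read noise. This removes the stochastic-gain noise entirely but
# undercounts pixels that received 2+ photons (coincidence loss) and misses
# the rare events whose amplified signal falls below the threshold.
counted = output > 5 * read_noise
est_thresh = counted.mean()            # thresholded estimate of the flux
est_linear = output.mean() / g         # proportional (analogue) estimate
est_corr = -np.log(1.0 - est_thresh)   # Poisson coincidence correction
```

At this flux the thresholded estimate is slightly biased low by coincidence and threshold losses; the proportional estimate is unbiased but carries the factor-of-two excess variance from the stochastic gain.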

    Planck priors for dark energy surveys

    Although cosmic microwave background (CMB) anisotropy data alone cannot simultaneously constrain the spatial curvature and the equation of state of dark energy, CMB data provide a valuable addition to other experimental results. However, computing a full CMB power spectrum with a Boltzmann code is quite slow; if we want to work with many dark energy and/or modified gravity models, or to optimize experiments where many different configurations need to be tested, a quicker and more efficient approach is needed. In this paper we consider the compression of the projected Planck CMB data into four parameters, R (scaled distance to the last scattering surface), l_a (angular scale of the sound horizon at last scattering), Omega_b h^2 (baryon density fraction) and n_s (power-law index of the primordial matter power spectrum), all of which can be computed quickly. We show that, although this compression loses information compared to the full likelihood, the loss becomes negligible when more data are added. We also demonstrate that the method can be used for scalar field dark energy independently of the parametrisation of the equation of state, and discuss how this method should be used for other kinds of dark energy models. Comment: 8 pages, 3 figures, 4 tables.
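
A compressed prior of this kind typically enters a parameter fit as a simple Gaussian chi-square term. A minimal sketch follows, with a placeholder mean vector and covariance; the numbers are invented for illustration and are NOT the projected Planck values derived in the paper.

```python
import numpy as np

# Hypothetical compressed CMB prior over (R, l_a, Omega_b h^2, n_s).
# The mean vector and covariance below are illustrative placeholders only.
mean = np.array([1.70, 302.0, 0.0220, 0.960])
cov = np.diag([0.02, 0.9, 0.0005, 0.01]) ** 2
icov = np.linalg.inv(cov)

def chi2_cmb(params):
    """Chi-square of a model's (R, l_a, Omega_b h^2, n_s) against the prior.

    A Gaussian in the four compressed parameters stands in for the full CMB
    likelihood, so it can be evaluated cheaply inside an MCMC loop.
    """
    d = np.asarray(params, dtype=float) - mean
    return float(d @ icov @ d)
```

In practice the four parameters are computed from each trial cosmology (all are fast background-level quantities), and `chi2_cmb` is added to the chi-squares of the other data sets.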